Most CNN-based super-resolution (SR) methods assume the degradation is known (e.g., bicubic). When the real degradation differs from this assumption, these methods suffer a severe performance drop. Some methods therefore attempt to train an SR network on complex combinations of multiple degradations to cover the space of real-world degradations. To adapt to multiple unknown degradations, introducing an explicit degradation estimator can indeed facilitate SR performance. However, previous explicit degradation-estimation methods are usually limited to predicting Gaussian blur kernels under supervision from ground-truth kernels, and estimation errors may lead to SR failure. It is therefore necessary to design a method that can extract implicit, discriminative degradation representations. To this end, we propose a Meta-learning based Region Degradation Aware SR Network (MRDA), consisting of a Meta-Learning Network (MLN), a Degradation Extraction Network (DEN), and a Region Degradation Aware SR Network (RDAN). To cope with the lack of ground-truth degradations, we use the MLN to rapidly adapt to a specific complex degradation after several iterations and extract implicit degradation information. A teacher network MRDA$_{T}$ is then designed to further exploit the degradation information extracted by the MLN for SR. However, the MLN needs to iterate over paired low-resolution (LR) and corresponding high-resolution (HR) images, which are unavailable in the inference phase. We therefore adopt knowledge distillation (KD) so that the student network learns to extract directly from LR images the same implicit degradation representation (IDR) as the teacher.
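As a rough illustration of the distillation step, the sketch below (NumPy, with a linear toy "encoder"; all names and shapes are hypothetical, not MRDA's actual architecture) trains a student that sees only LR features to match a teacher representation computed from paired LR/HR data, using an L1 distillation loss:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-ins: the teacher's implicit degradation representation
# (IDR) is a fixed function of paired LR/HR data, which the student must
# reproduce from the LR input alone.
W_teacher = rng.normal(size=(8, 64))              # frozen teacher "encoder"
lr_feats = rng.normal(size=(256, 32))             # flattened LR inputs
hr_cues = rng.normal(size=(256, 32))              # HR-derived cues (train-time only)
idr_teacher = np.concatenate([lr_feats, hr_cues], axis=1) @ W_teacher.T

# Student: a linear map from LR features only, distilled with an L1 loss.
W_student = np.zeros((8, 32))

def l1_loss(W):
    return float(np.abs(lr_feats @ W.T - idr_teacher).mean())

loss_before = l1_loss(W_student)
for _ in range(500):                              # subgradient descent on the L1 loss
    residual_sign = np.sign(lr_feats @ W_student.T - idr_teacher)
    W_student -= 1e-2 * residual_sign.T @ lr_feats / lr_feats.shape[0]
loss_after = l1_loss(W_student)
```

Because the teacher also sees HR-derived cues, the student cannot match it exactly; distillation only transfers the part of the IDR recoverable from the LR input.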
The goal of space-time video super-resolution (STVSR) is to increase the spatial resolution and frame rate of low-resolution (LR), low-frame-rate (LFR) videos. Recent approaches based on deep learning have made significant improvements, but most of them only use two adjacent frames, i.e., short-term features, to synthesize the missing frame embedding, which fails to fully explore the information flow of consecutive input LR frames. In addition, existing STVSR models hardly exploit temporal contexts explicitly to assist high-resolution (HR) frame reconstruction. To address these issues, in this paper we propose a deformable attention network called STDAN. First, we design a Long-Short Term Feature Interpolation (LSTFI) module, which is capable of excavating abundant content from more neighboring input frames for interpolation through a bidirectional RNN structure. Second, we propose a Spatial-Temporal Deformable Feature Aggregation (STDFA) module, in which spatial and temporal contexts in dynamic video frames are adaptively captured and aggregated to enhance SR reconstruction. Experimental results on several datasets demonstrate that our method outperforms state-of-the-art STVSR methods. The code is available at https://github.com/littlewhitesea/stdan.
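The long-short term interpolation idea, gathering context from the whole clip through forward and backward recurrent passes rather than from two neighbors only, can be sketched as follows (a toy NumPy cell with illustrative weights, not the LSTFI module itself):

```python
import numpy as np

rng = np.random.default_rng(0)
T, C = 6, 16                         # frames, feature channels
feats = rng.normal(size=(T, C))      # per-frame features (illustrative)

# Toy shared recurrent cell; a real module would use learned conv cells.
Wh = rng.normal(scale=0.3, size=(C, C))
Wx = rng.normal(scale=0.3, size=(C, C))

def step(h, x):
    return np.tanh(Wh @ h + Wx @ x)

# Forward pass: accumulate context from all past frames.
h, fwd = np.zeros(C), []
for t in range(T):
    h = step(h, feats[t])
    fwd.append(h)

# Backward pass: accumulate context from all future frames.
h, bwd = np.zeros(C), [None] * T
for t in reversed(range(T)):
    h = step(h, feats[t])
    bwd[t] = h

# Interpolated feature for a missing frame between t0 and t0 + 1 fuses
# long-term context from both directions, not just the two neighbours.
t0 = 2
interp_feat = 0.5 * (fwd[t0] + bwd[t0 + 1])
```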
Reference-based super-resolution (RefSR) has made significant progress in producing realistic textures with the aid of an external reference (Ref) image. However, existing RefSR methods obtain high-quality correspondence matching at the cost of computation that grows quadratically with the input size, limiting their application. Moreover, these methods usually suffer from scale misalignment between the low-resolution (LR) image and the Ref image. In this paper, we propose an Accelerated Multi-Scale Aggregation network (AMSA) for reference-based super-resolution, comprising a Coarse-to-Fine Embedded PatchMatch (CFE-PatchMatch) module and a Multi-Scale Dynamic Aggregation (MSDA) module. To improve matching efficiency, we design a novel Embedded PatchMatch scheme with random sample propagation, which supports end-to-end training with asymptotically linear computational cost in the input size. To further reduce computational cost and accelerate convergence, we apply a coarse-to-fine strategy on top of Embedded PatchMatch, yielding CFE-PatchMatch. To fully exploit reference information across multiple scales and enhance robustness to scale misalignment, we develop the MSDA module, consisting of Dynamic Aggregation and Multi-Scale Aggregation. Dynamic Aggregation corrects minor scale misalignment by dynamically aggregating features, and Multi-Scale Aggregation brings robustness to large scale misalignment by fusing multi-scale information. Experimental results show that the proposed AMSA achieves superior performance over state-of-the-art methods on both quantitative and qualitative evaluations.
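A minimal classical PatchMatch loop, random initialization, neighbor propagation, and a shrinking random search, conveys why such matching is near-linear in the number of patches (this is the generic algorithm, not the paper's embedded, trainable variant):

```python
import numpy as np

rng = np.random.default_rng(0)
P = 3                                    # patch size
H, W = 24, 24
src = rng.random((H, W))
ref = rng.random((H, W))
Hn, Wn = H - P + 1, W - P + 1            # valid patch origins

def cost(y, x, ry, rx):
    """Sum of squared differences between a src patch and a ref patch."""
    d = src[y:y+P, x:x+P] - ref[ry:ry+P, rx:rx+P]
    return float((d * d).sum())

# Random initialization of the nearest-neighbour field (NNF).
nnf = rng.integers(0, [Hn, Wn], size=(Hn, Wn, 2))
costs = np.array([[cost(y, x, *nnf[y, x]) for x in range(Wn)]
                  for y in range(Hn)])
init_mean = float(costs.mean())

def try_improve(y, x, ry, rx):
    """Accept a candidate match only if it lowers the cost."""
    if 0 <= ry < Hn and 0 <= rx < Wn:
        c = cost(y, x, ry, rx)
        if c < costs[y, x]:
            costs[y, x] = c
            nnf[y, x] = (ry, rx)

for it in range(4):
    step = 1 if it % 2 == 0 else -1      # alternate scan direction
    ys = range(Hn) if step == 1 else range(Hn - 1, -1, -1)
    xs = range(Wn) if step == 1 else range(Wn - 1, -1, -1)
    for y in ys:
        for x in xs:
            # Propagation: shift the match of the preceding neighbour.
            if 0 <= y - step < Hn:
                ry, rx = nnf[y - step, x]
                try_improve(y, x, ry + step, rx)
            if 0 <= x - step < Wn:
                ry, rx = nnf[y, x - step]
                try_improve(y, x, ry, rx + step)
            # Random search with an exponentially shrinking radius.
            r = max(Hn, Wn)
            while r >= 1:
                try_improve(y, x,
                            nnf[y, x, 0] + rng.integers(-r, r + 1),
                            nnf[y, x, 1] + rng.integers(-r, r + 1))
                r //= 2

final_mean = float(costs.mean())
```

Each pixel does only a constant amount of work per iteration, which is what gives PatchMatch-style schemes their asymptotically linear cost in the input size.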
Non-Local Attention (NLA) brings significant improvement to single image super-resolution (SISR) by exploiting intrinsic feature correlations in natural images. However, NLA assigns large weights to noisy information and consumes computational resources quadratic in the input size, limiting its performance and application. In this paper, we propose a novel Efficient Non-Local Contrastive Attention (ENLCA) to perform long-range visual modeling and exploit more relevant non-local features. Specifically, ENLCA consists of two parts: Efficient Non-Local Attention (ENLA) and Sparse Aggregation. ENLA adopts a kernel method to approximate the exponential function and obtains linear computational complexity. For Sparse Aggregation, we multiply the inputs by an amplification factor to focus on informative features, but the variance of the approximation grows exponentially. Therefore, contrastive learning is applied to further separate relevant and irrelevant features. To demonstrate the effectiveness of ENLCA, we construct an architecture called the Efficient Non-Local Contrastive Network (ENLCN) by adding a few of our modules to a simple backbone. Extensive experimental results show that ENLCN achieves superior performance over state-of-the-art approaches on both quantitative and qualitative evaluations.
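One common kernel-method route to linear-complexity attention is positive random features that approximate exp(q · k) (a Performer-style sketch under that assumption; ENLA's exact construction may differ):

```python
import numpy as np

rng = np.random.default_rng(0)
n, d, m = 64, 8, 256                 # tokens, channel dim, random features
Q = rng.normal(scale=0.3, size=(n, d))
K = rng.normal(scale=0.3, size=(n, d))
V = rng.normal(size=(n, d))

def phi(X, Omega):
    """Positive random features: phi(x) . phi(y) ~= exp(x . y) in expectation."""
    return np.exp(X @ Omega.T - 0.5 * (X**2).sum(-1, keepdims=True)) / np.sqrt(Omega.shape[0])

Omega = rng.normal(size=(m, d))
Qf, Kf = phi(Q, Omega), phi(K, Omega)

# Associating (Kf^T V) first costs O(n * m * d) instead of the O(n^2 * d) of
# explicit attention, while still normalizing implicitly over all keys.
out = (Qf @ (Kf.T @ V)) / (Qf @ Kf.sum(axis=0))[:, None]
```

Amplifying the inputs before the feature map sharpens the implicit attention weights, but (as the abstract notes) it also inflates the variance of this approximation exponentially.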
Few-shot semantic segmentation aims to segment novel-class objects given only a few labeled support images. Most advanced solutions exploit a metric learning framework that performs segmentation by matching each query feature to a learned class-specific prototype. However, this framework suffers from biased classification due to incomplete feature comparison. To address this issue, we present adaptive prototype representation by introducing class-specific and class-agnostic prototypes, thereby constructing complete sample pairs for learning semantic alignment with query features. The complementary feature learning effectively enriches feature comparison and helps yield an unbiased segmentation model in the few-shot setting. It is implemented with a two-branch end-to-end network (i.e., a class-specific branch and a class-agnostic branch), which generates prototypes and then combines them with query features to perform comparison. Moreover, the proposed class-agnostic branch is simple yet effective. In practice, it can adaptively generate multiple class-agnostic prototypes for query images and learn feature alignment in a self-contrastive manner. Extensive experiments on PASCAL-5$^i$ and COCO-20$^i$ demonstrate the superiority of our method. Without sacrificing inference efficiency, our model achieves state-of-the-art results in both 1-shot and 5-shot settings for semantic segmentation.
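The prototype-matching backbone that such metric-learning frameworks share can be sketched in a few lines: masked average pooling builds a class-specific prototype from the support image, and cosine similarity compares it with every query location (toy shapes and threshold; the class-agnostic branch is not modeled here):

```python
import numpy as np

rng = np.random.default_rng(0)
C, H, W = 16, 8, 8
support_feat = rng.normal(size=(C, H, W))
query_feat = rng.normal(size=(C, H, W))
support_mask = np.zeros((H, W))
support_mask[2:6, 2:6] = 1.0                     # novel-class region (toy)

# Masked average pooling: a class-specific prototype from the support image.
proto = (support_feat * support_mask).sum(axis=(1, 2)) / support_mask.sum()

# Cosine similarity between the prototype and every query location.
qf = query_feat.reshape(C, -1)
sim = (proto @ qf) / (np.linalg.norm(proto) * np.linalg.norm(qf, axis=0) + 1e-8)
pred = sim.reshape(H, W) > 0.0                   # toy threshold, not a learned head
```

Comparing queries only against this one class-specific vector is exactly the "incomplete feature comparison" the abstract criticizes; the proposed class-agnostic prototypes supply the missing side of the comparison.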
Interviews are regarded as one of the most crucial steps in recruitment. To fully prepare for interviews with recruiters, job seekers usually practice with mock interviews among themselves. However, such mock interviews with peers are generally far from the real interview experience: the mock interviewers are not guaranteed to be professional and are unlikely to behave like real interviewers. Due to the rapid growth of online recruitment in recent years, recruiters increasingly conduct interviews online, which makes it possible to collect real interview data from real interviewers. In this paper, we propose a novel application named EZInterviewer, which aims to learn from online interview data and provide mock interview services to job seekers. The task is challenging in two ways: (1) interview data are now available but still low-resource; (2) generating meaningful and relevant interview dialogs requires a thorough understanding of both resumes and job descriptions. To address the low-resource challenge, EZInterviewer is trained on a very small set of interview dialogs. The key idea is to reduce the number of parameters that rely on interview dialogs by disentangling the knowledge selector from the dialog generator, so that most parameters can be trained with ungrounded dialogs as well as resume data, which are not low-resource. Evaluation results on a real-world job interview dialog dataset indicate that we achieve promising results in generating mock interviews. With the help of EZInterviewer, we hope to make mock interview practice easier for job seekers.
General nonlinear sieve learnings are classes of nonlinear sieves that can approximate nonlinear functions of high-dimensional variables much more flexibly than various linear sieves (or series). This paper considers general nonlinear sieve quasi-likelihood ratio (GN-QLR) based inference on expectation functionals of time series data, where the functionals of interest are based on nonparametric functions that satisfy conditional moment restrictions and are learned using multilayer neural networks. While the asymptotic normality of the estimated functionals depends on an unknown Riesz representer of the functional space, we show that the optimally weighted GN-QLR statistic is asymptotically chi-square distributed, regardless of whether the expectation functional is regular (root-$n$ estimable) or not. This holds when the data are weakly dependent and satisfy a beta-mixing condition. We apply our method to off-policy evaluation in reinforcement learning by formulating the Bellman equation in the conditional moment restriction framework, so that we can make inference about the state-specific value functional using the proposed GN-QLR method with time series data. In addition, estimating the averaged partial means and averaged partial derivatives of nonparametric instrumental variables and quantile IV models are also presented as leading examples. Finally, a Monte Carlo study shows the finite sample performance of the procedure.
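In generic notation (not necessarily the paper's), casting the Bellman equation as a conditional moment restriction looks like:

```latex
% For a fixed policy with discount factor \gamma, reward r_t and value
% function V, every state s_t satisfies the conditional moment restriction
\mathbb{E}\!\left[\, r_t + \gamma\, V(s_{t+1}) - V(s_t) \;\middle|\; s_t \,\right] = 0 ,
% so V plays the role of the nonparametric function learned by the neural
% network, and a state-specific value \theta_0 = V(s^{*}) is the expectation
% functional on which GN-QLR inference can be performed.
```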
This paper presents a safety-critical locomotion control framework for quadrupedal robots. Our goal is to enable quadrupedal robots to safely navigate cluttered environments. To tackle this, we introduce exponential Discrete Control Barrier Functions (exponential DCBFs) with duality-based obstacle avoidance constraints into a Nonlinear Model Predictive Control (NMPC) with Whole-Body Control (WBC) framework for quadrupedal locomotion control. This enables us to use polytopes to describe the shapes of the robot and obstacles for collision avoidance during locomotion control. Compared to most prior work, especially work using CBFs that relies on spherical and conservative approximations for obstacle avoidance, this work demonstrates a quadrupedal robot autonomously and safely navigating through very tight spaces in the real world. (Our open-source code is available at github.com/HybridRobotics/quadruped_nmpc_dcbf_duality, and the video is available at youtu.be/p1gSQjwXm1Q.)
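The discrete CBF condition underlying such frameworks can be illustrated in one dimension: requiring h(x_{k+1}) >= (1 - gamma) * h(x_k) lets the barrier shrink at most geometrically, so the safe set is never left. A toy 1-D safety filter (illustrative constants; the paper enforces this inside an NMPC with polytope constraints, not by clamping):

```python
# Toy 1-D safety filter illustrating the discrete CBF condition
# h(x_{k+1}) >= (1 - gamma) * h(x_k): the barrier h may shrink at most
# geometrically, so the safe set {h >= 0} stays forward invariant.
dt, gamma, r = 0.1, 0.3, 1.0             # step, CBF rate, obstacle radius (illustrative)

def safe_input(x, u_des):
    """Clamp u so that h(x) = x - r decays no faster than (1 - gamma) per step."""
    u_min = -gamma * (x - r) / dt
    return max(u_des, u_min)

x, traj = 2.0, []
for _ in range(50):
    x += safe_input(x, u_des=-5.0) * dt  # nominal input drives into the obstacle
    traj.append(x)
```

The state approaches the obstacle boundary asymptotically but never crosses it, which is the invariance property the DCBF constraint buys.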
Video semantic segmentation (VSS) is beneficial for dealing with dynamic scenes due to the continuous nature of real-world environments. On the one hand, some methods alleviate the prediction inconsistency problem between consecutive frames. On the other hand, other methods employ the previous frame as prior information to assist in segmenting the current frame. Although previous methods achieve superior performance on independent and identically distributed (i.i.d.) data, they cannot generalize well to other unseen domains. Thus, we explore a new task, video generalizable semantic segmentation (VGSS), which considers both continuous frames and domain generalization. In this paper, we propose a class-wise non-salient region generalized (CNSG) framework for the VGSS task. Concretely, we first define the class-wise non-salient feature, which describes features of the class-wise non-salient region that carry more generalizable information. Then, we propose a class-wise non-salient feature reasoning strategy to adaptively select and enhance the most generalizable channels. Finally, we propose an inter-frame non-salient centroid alignment loss to alleviate the prediction inconsistency problem in the VGSS task. We also extend our video-based framework to the image-based generalizable semantic segmentation (IGSS) task. Experiments demonstrate that our CNSG framework yields significant improvements on the VGSS and IGSS tasks.
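An inter-frame centroid alignment loss of the general kind described above can be sketched as follows (toy shapes and a shared label map; the actual CNSG loss operates on class-wise non-salient regions, which are not modeled here):

```python
import numpy as np

rng = np.random.default_rng(0)
C, H, W, n_cls = 8, 6, 6, 3
feat_t = rng.normal(size=(C, H, W))                    # frame t features
feat_t1 = feat_t + 0.1 * rng.normal(size=(C, H, W))    # frame t+1 (similar scene)
labels = (np.arange(H * W) % n_cls).reshape(H, W)      # shared toy label map

def class_centroids(feat, labels, n_cls):
    """Per-class mean feature vector (a stand-in for class-wise centroids)."""
    f = feat.reshape(feat.shape[0], -1)
    l = labels.reshape(-1)
    return np.stack([f[:, l == c].mean(axis=1) for c in range(n_cls)])

cen_t = class_centroids(feat_t, labels, n_cls)
cen_t1 = class_centroids(feat_t1, labels, n_cls)

# Alignment loss: pull each class centroid in frame t+1 toward frame t,
# penalizing inconsistent class statistics across consecutive frames.
align_loss = float(((cen_t - cen_t1) ** 2).mean())
```

Minimizing such a loss pushes per-class feature statistics to stay stable across frames, which is one way to attack the prediction inconsistency problem.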
In this paper, we improve the kernel alignment regret bound for online kernel learning under the hinge loss function. The previous algorithm achieves a regret of $O((\mathcal{A}_TT\ln{T})^{\frac{1}{4}})$ at a computational complexity (space and per-round time) of $O(\sqrt{\mathcal{A}_TT\ln{T}})$, where $\mathcal{A}_T$ is called the \textit{kernel alignment}. We propose an algorithm whose regret bound and computational complexity are better than previous results. Our results depend on the decay rate of the eigenvalues of the kernel matrix. If the eigenvalues of the kernel matrix decay exponentially, then our algorithm enjoys a regret of $O(\sqrt{\mathcal{A}_T})$ at a computational complexity of $O(\ln^2{T})$. Otherwise, our algorithm enjoys a regret of $O((\mathcal{A}_TT)^{\frac{1}{4}})$ at a computational complexity of $O(\sqrt{\mathcal{A}_TT})$. We extend our algorithm to batch learning and obtain an $O(\frac{1}{T}\sqrt{\mathbb{E}[\mathcal{A}_T]})$ excess risk bound, which improves on the previous $O(1/\sqrt{T})$ bound.
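One standard way to keep per-round time independent of $T$ in online kernel learning is a fixed finite-dimensional feature map, e.g. random Fourier features for a Gaussian kernel, with online (sub)gradient updates on the hinge loss. The sketch below is this generic baseline, not the paper's algorithm (whose budget adapts to the eigenvalue decay):

```python
import numpy as np

rng = np.random.default_rng(0)
d, m, T = 5, 100, 500
Omega = rng.normal(size=(m, d))                 # random Fourier frequencies
b = rng.uniform(0.0, 2.0 * np.pi, size=m)

def feat(x):
    """Random Fourier features: feat(x) . feat(y) ~= Gaussian kernel k(x, y)."""
    return np.sqrt(2.0 / m) * np.cos(Omega @ x + b)

w = np.zeros(m)                                 # model lives in feature space
eta = 0.5
X = rng.normal(size=(T, d))
y = np.sign(X[:, 0])                            # toy data stream

mistakes = 0
for t in range(T):                              # per-round cost is O(m d), not O(t)
    z = feat(X[t])
    pred = float(w @ z)
    mistakes += int(np.sign(pred) != y[t])
    if y[t] * pred < 1.0:                       # positive hinge loss: update
        w += eta * y[t] * z
```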